Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations. However, the computation per iteration is very high compared to LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm and switches between them at each iteration. We show that this scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost. One can incorporate any secondary algorithm to improve performance in practice. This scheme arises from a unified view of boosting algorithms for soft margin optimization. More specifically, we show that LPBoost, ERLPBoost, and C-ERLPBoost are instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
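To make the Frank-Wolfe connection concrete, here is a minimal sketch of the classical Frank-Wolfe update over the probability simplex, which is the template the abstract says LPBoost, ERLPBoost, and C-ERLPBoost instantiate. This is not the paper's boosting scheme: the objective, gradient oracle, and step size below are textbook toy choices, and the function name is ours.

```python
import numpy as np

def frank_wolfe_simplex(grad, m, iterations=100):
    """Minimize a convex f over the m-dimensional probability simplex,
    given only a gradient oracle (a hypothetical toy setup)."""
    d = np.ones(m) / m  # start at the uniform distribution
    for t in range(iterations):
        g = grad(d)
        # Linear minimization oracle over the simplex: the best
        # vertex, i.e. the one-hot vector at the smallest gradient entry.
        s = np.zeros(m)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2)  # standard diminishing step size
        d = (1 - gamma) * d + gamma * s
    return d

# Toy example: minimize f(d) = ||d - c||^2, whose minimizer over the
# simplex is c itself since c already lies on the simplex.
c = np.array([0.7, 0.2, 0.1])
d_star = frank_wolfe_simplex(lambda d: 2 * (d - c), m=3, iterations=500)
```

The "secondary algorithm" idea in the abstract corresponds to replacing or supplementing the simple vertex step `s` with a better update when one is available, while keeping the Frank-Wolfe guarantee as a fallback.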
Several techniques to map various types of components, such as words, attributes, and images, into an embedded space have been studied. Most of them estimate the embedded representation of a target entity as a point in the projective space. Some models, such as Word2Gauss, assume a probability distribution behind the embedded representation, which enables the spread or variance of the meaning of embedded target components to be captured and considered in more detail. We examine the method of estimating embedded representations as probability distributions for the interpretation of fashion-specific abstract and difficult-to-understand terms. Terms such as "casual," "adult-casual," "beauty-casual," and "formal" are extremely subjective and abstract and are difficult for both experts and non-experts to understand, which discourages users from trying new fashion. We propose an end-to-end model called dual Gaussian visual-semantic embedding, which maps images and attributes into the same projective space and enables the interpretation of the meaning of these terms through its broad applications. We demonstrate the effectiveness of the proposed method through multifaceted experiments involving image and attribute mapping, image retrieval and re-ordering techniques, and a detailed theoretical/analytical discussion of the distance measure included in the loss function.
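To illustrate what "embedding as a probability distribution" buys over a point embedding, here is a hedged sketch of a closed-form KL divergence between two diagonal-Gaussian embeddings, a standard distance for models in the Word2Gauss family. The abstract does not specify the paper's actual distance measure, so the KL choice, the function name, and the toy parameters below are all our assumptions.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """Closed-form KL(N(mu1, diag(var1)) || N(mu2, diag(var2))).
    A common (asymmetric) distance between Gaussian embeddings."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return 0.5 * np.sum(
        np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0
    )

# Toy numbers: a broad, vague term (large variance) vs. a narrow,
# specific term (small variance) at a different mean.
kl = kl_diag_gaussians([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.5, 0.5])
```

Because the variance term enters the divergence, a vague term like "casual" (broad distribution) and a specific one like "formal" (narrow distribution) are distinguished even when their means are close, which a point embedding cannot express.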